Abstract
Electroencephalography (EEG) has gained prominence as a non-invasive and efficient technique for decoding human emotional states from neural activity. This paper presents a structured machine learning framework for emotion classification using EEG signals elicited by emotionally stimulating videos. Leveraging the SEED dataset, the study extracts meaningful features from multi-channel EEG recordings and classifies emotional states into positive, neutral, and negative categories. The methodology integrates signal preprocessing, dimensionality reduction, and feature extraction techniques such as Differential Entropy (DE), Power Spectral Density (PSD), and Hjorth parameters. Machine learning classifiers, including Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Multi-Layer Perceptrons (MLP), are trained and compared. Among these, the MLP model achieved the highest accuracy, capturing nonlinear patterns across EEG channels effectively. Models are evaluated using standard metrics, including accuracy, F1-score, and confusion matrices. The results confirm that the proposed EEG-based framework achieves robust emotion classification, with significant implications for mental health monitoring, affective computing, and human-computer interaction.
1. Introduction
Emotion recognition is vital for fields such as mental health, human-computer interaction, and affective computing. Traditional modalities (facial expressions, speech, text) are prone to cultural bias and can be deliberately suppressed or masked. EEG offers a more objective, real-time measure by capturing brainwave patterns directly, making it well suited to detecting emotional states.
2. Motivation & Objectives
Leverage the SEED dataset (EEG recorded while subjects watch emotion-eliciting film clips).
Classify three emotional states: Positive, Neutral, Negative.
Use classical machine learning models (SVM, KNN, MLP) for computational efficiency and interpretability.
Extract features such as Differential Entropy (DE) and Power Spectral Density (PSD), and apply PCA and RFE for dimensionality reduction.
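The DE and PSD features named above can be sketched for a single channel as follows. This is a minimal illustration, not the paper's code: it assumes the widely used Gaussian closed form for DE, 0.5·ln(2πe·σ²), computed on band-filtered signals, a 200 Hz sampling rate (common for SEED distributions, but an assumption here), and illustrative band edges.

```python
import numpy as np
from scipy.signal import welch, butter, sosfiltfilt

FS = 200  # Hz; assumed sampling rate
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def band_features(signal, fs=FS):
    """Per-band PSD and Differential Entropy (DE) of one EEG channel.

    For a band-filtered, approximately Gaussian signal, DE has the
    closed form 0.5 * ln(2 * pi * e * variance), the form commonly
    used in EEG emotion work.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        # PSD feature: mean spectral power inside the band
        in_band = (freqs >= lo) & (freqs < hi)
        feats[f"psd_{name}"] = psd[in_band].mean()
        # DE feature: Gaussian closed form on the band-filtered signal
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, signal)
        feats[f"de_{name}"] = 0.5 * np.log(2 * np.pi * np.e * filtered.var())
    return feats

# Example on four seconds of synthetic "EEG"
x = np.random.default_rng(0).standard_normal(4 * FS)
print(band_features(x))
```

In a full pipeline these ten values (five bands × two feature types) would be computed per channel and per time window, then concatenated into the feature vector passed to PCA/RFE.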
3. Literature Review
EEG over traditional methods: Provides higher temporal resolution and resilience to external masking.
Evolution of the field: From statistical ERP-based methods to ML/DL algorithms.
Common Features: DE, PSD, Hjorth parameters, wavelets, and statistical metrics.
SEED Dataset: 62-channel EEG, 15 subjects, film clips labeled with emotions. High-resolution, validated dataset.
Research Gaps: Subject variability, inconsistent pipelines, limited real-time deployment, deep learning's high computational cost.
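The Hjorth parameters listed among the common features reduce to simple variance ratios of a signal and its discrete derivatives; the sketch below is our own minimal formulation, not code from any surveyed work.

```python
import numpy as np

def hjorth(x):
    """Hjorth Activity, Mobility, and Complexity of a 1-D signal.

    Activity   = var(x)
    Mobility   = sqrt(var(x') / var(x))
    Complexity = Mobility(x') / Mobility(x)
    """
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = x.var()
    mobility = np.sqrt(dx.var() / activity)
    complexity = np.sqrt(ddx.var() / dx.var()) / mobility
    return activity, mobility, complexity

# For white noise, var(x') = 2 * var(x), so Mobility approaches sqrt(2).
noise = np.random.default_rng(0).standard_normal(100_000)
print(hjorth(noise))
```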
4. Methodology
Preprocessing: Bandpass filtering (1–50 Hz), artifact removal, and selection of emotion-related channels (e.g., F3, F4, T7, O1).
Hardware: Minimum 8 GB RAM; GPU optional but beneficial.
Pipeline: From raw EEG to feature vector to classification.
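The filtering and channel-selection steps above can be sketched as follows; a minimal version assuming a 200 Hz sampling rate and 62-channel input, with artifact removal (e.g. ICA) omitted for brevity. The channel indices are placeholders, not the actual SEED montage ordering.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(eeg, fs=200, low=1.0, high=50.0):
    """Band-pass every channel to 1-50 Hz; eeg is (n_channels, n_samples)."""
    sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

# Hypothetical indices for the emotion-related channels named in the text;
# the real positions must be taken from the SEED channel ordering.
CHANNELS = {"F3": 3, "F4": 5, "T7": 14, "O1": 58}

raw = np.random.default_rng(1).standard_normal((62, 1000))
clean = bandpass(raw)
subset = clean[list(CHANNELS.values())]
print(subset.shape)  # (4, 1000)
```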
5. Results
Best performance: Achieved by MLP (97.18% accuracy).
Metrics: High F1-score, precision, recall across all emotion classes.
Visualization: Confusion matrices and prediction graphs showed effective classification.
SVM and KNN: Performed well but lagged behind MLP in cross-subject generalization.
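The three classifiers can be compared on a held-out split as in the sketch below. The feature matrix is synthetic (a stand-in for the extracted DE/PSD features), and the hyperparameters and PCA dimensionality are illustrative choices, not the settings behind the reported numbers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the DE/PSD feature matrix: three emotion classes.
X, y = make_classification(n_samples=600, n_features=50, n_informative=20,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                         random_state=0),
}
for name, clf in models.items():
    # Standardize, reduce dimensionality with PCA, then classify.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=20), clf)
    pipe.fit(X_tr, y_tr)
    pred = pipe.predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"macro-F1={f1_score(y_te, pred, average='macro'):.3f}")
    print(confusion_matrix(y_te, pred))
```

Where interpretability of individual features matters more than compactness, RFE (`sklearn.feature_selection.RFE`) can replace the PCA step, since it keeps original features rather than projecting them.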
Key Contributions
Effective use of entropy and spectral features for EEG-based emotion detection.
Demonstrated high accuracy with lightweight machine learning models.
Provided a framework suitable for real-time deployment in applications like mental health monitoring, emotion-aware agents, and adaptive learning.
Conclusion
This study presents an effective and computationally efficient approach for EEG-based emotion recognition using machine learning techniques. By leveraging the SEED dataset and implementing a modular pipeline that includes signal preprocessing, feature extraction, and classification, the proposed framework successfully decodes emotional states into Positive, Neutral, and Negative categories.
Among the evaluated classifiers, the Multi-Layer Perceptron (MLP) model demonstrated superior accuracy and generalization capability compared to traditional models like Support Vector Machine (SVM) and K-Nearest Neighbors (KNN). The inclusion of features such as Differential Entropy (DE), Power Spectral Density (PSD), and Hjorth parameters significantly enhanced the classification performance, while dimensionality reduction techniques like Principal Component Analysis (PCA) and Recursive Feature Elimination (RFE) helped mitigate overfitting and improved efficiency.
The results, supported by strong performance metrics including accuracy and F1-score, confirm the viability of EEG as a reliable modality for affective state detection. This research contributes to the growing body of work in affective computing and has potential applications in mental health monitoring, brain-computer interfaces (BCIs), adaptive learning systems, and emotion-aware human-computer interaction.
Future work will explore real-time emotion recognition, integration of deep learning models such as LSTM and CNNs, and cross-dataset validation for broader applicability.